9 research outputs found

    Automatic processing: Semantic analysis and translation of frozen expressions with NooJ

    The purpose of this article is to define more closely the notion of freezing which, despite numerous publications in the field, remains a vague concept. The reflections are the result of a research project that aims to build an almost exhaustive database of fixed verbal expressions in English. After a review of the essential properties of these expressions, we indicate why each of these criteria is problematic. One of the major problems lies in the existence of related phenomena: from a semantic point of view, fixed expressions participate in the general phenomenon of polysemy; lexical solidarity brings them closer to collocations; and finally, morphosyntactic fixity is present in many fixed sentences that are conversational routines, and even partly in so-called free syntax.

    How can the information system be directed to contribute to the overall performance of administrations? The status of Moroccan universities

    Received Jun 12th, 2019. An information system necessarily has a mode of governance: whatever its nature, there are rules applying to the system, and control mechanisms are usually in place. Thus, the question is not one of creating governance of the information system, but of making it a genuine tool of management and improvement. In the UN e-government ranking, Morocco ranks 140th out of 192 member states, even though it introduced computers into its administration as early as the 1960s. Should this ranking, which illustrates the delay in the computerization of the Moroccan administration and therefore the difficulties Morocco encounters in improving the performance of its administration, not be attributed to a deficit in the public management of information systems? The objective of this article is to evaluate the management of information systems in Moroccan public administrations, and specifically to dissect the profile of Moroccan universities.

    Using isolation forest in anomaly detection: The case of credit card transactions

    With the evolution of new technologies, especially in e-commerce and online banking, payment by credit card has seen a significant increase, and the credit card has become the most widely used tool for online shopping. This high rate of use brings with it fraud and considerable damage. It is very important to stop fraudulent transactions because they cause huge financial losses over time, and their detection is an important application of anomaly detection. There are different approaches to detecting anomalies, namely SVM, logistic regression, decision trees and so on. However, these remain limited since they are supervised algorithms that require labeled data in order to learn whether transactions are fraudulent or not. The goal of this paper is to build a credit card fraud detection system able to detect the highest number of new fraudulent transactions in real time with high accuracy. We also compare different unsupervised techniques for credit card fraud detection, namely LOF, one-class SVM, K-means and Isolation Forest, so as to single out the best approach.
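    The unsupervised idea behind Isolation Forest can be sketched with scikit-learn on synthetic data; the dataset, cluster positions and contamination rate below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 "legitimate" transactions clustered near the origin
inliers = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
# 10 "fraudulent" transactions far from the cluster
outliers = rng.uniform(low=8.0, high=10.0, size=(10, 2))
X = np.vstack([inliers, outliers])

# Isolation Forest scores points by how few random splits isolate them;
# no labels are needed, only an expected fraction of anomalies.
model = IsolationForest(n_estimators=100, contamination=10 / 210, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

flagged = int((labels[200:] == -1).sum())
print(f"{flagged}/10 injected anomalies flagged")
```

    Because anomalies are isolated in fewer random splits than dense inliers, the far-away points receive the lowest scores without any label ever being used.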

    An efficient combination between Berlekamp-Massey and Hartmann Rudolph algorithms to decode BCH codes

    In digital communication and storage systems, data is exchanged over a communication channel that is not completely reliable. Therefore, detection and correction of possible errors are required, achieved by adding redundant bits to the information data. Several algebraic and heuristic decoders have been designed to detect and correct errors. The Hartmann Rudolph (HR) algorithm can decode a sequence symbol by symbol, but it has a high complexity, which is why we suggest using it partially, together with the algebraic hard-decision Berlekamp-Massey (BM) decoder. In this work, we propose a concatenation of the Partial Hartmann Rudolph (PHR) algorithm and the Berlekamp-Massey decoder to decode BCH (Bose-Chaudhuri-Hocquenghem) codes. Very satisfying results are obtained; for example, we used only 0.54% of the dual space size for the BCH code (63,39,9) while maintaining very good decoding quality. To judge our results, we compare them with other decoders.
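    The Berlekamp-Massey step can be illustrated in its simplest binary form, which finds the shortest LFSR (the role played by the error-locator polynomial in BCH decoding) generating a sequence over GF(2). This is a generic textbook sketch, not the paper's PHR+BM implementation:

```python
def berlekamp_massey_gf2(s):
    """Return the linear complexity L and connection polynomial
    coefficients c[0..L] (c[0] = 1) of the binary sequence s."""
    n = len(s)
    c, b = [0] * (n + 1), [0] * (n + 1)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:  # prediction failed: update the connection polynomial
            t = c[:]
            for j in range(n + 1 - (i - m)):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L, c[: L + 1]

L, poly = berlekamp_massey_gf2([1, 0, 1, 0, 1, 0])
print(L, poly)  # shortest recurrence is s[i] = s[i-2]
```

    For the alternating sequence above the algorithm returns L = 2 with polynomial 1 + x^2, i.e. each bit equals the bit two positions earlier.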

    A crime prediction model based on spatial and temporal data

    In a world where data has become precious thanks to what we can do with it, such as forecasting the future, the fight against crime can also benefit from this technological trend. In this work, we propose a crime prediction model based on historical data that we prepare and transform into spatiotemporal data by crime type, for use in machine learning algorithms, in order to predict, with maximum accuracy, the risk of crimes occurring at a spatiotemporal point in the city. So that the model remains general rather than tied to a specific type of crime, we describe the risk by a vector of n values representing the risks by type of crime.
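    The per-type risk vector described above can be sketched as a simple aggregation of historical records into spatiotemporal cells; the cell size, time bins and crime taxonomy below are illustrative assumptions, not the paper's feature engineering:

```python
from collections import Counter, defaultdict

CRIME_TYPES = ["theft", "assault", "burglary"]  # assumed taxonomy

def risk_vectors(records, cell=0.01, n_time_bins=4):
    """Aggregate (lat, lon, hour, crime_type) records into one vector
    per spatiotemporal cell: the i-th component is the share of
    historical crimes of type CRIME_TYPES[i] in that cell."""
    counts = defaultdict(Counter)
    for lat, lon, hour, ctype in records:
        key = (round(lat / cell), round(lon / cell), hour * n_time_bins // 24)
        counts[key][ctype] += 1
    return {key: [c[t] / sum(c.values()) for t in CRIME_TYPES]
            for key, c in counts.items()}

records = [
    (33.5731, -7.5898, 23, "theft"),
    (33.5732, -7.5899, 22, "theft"),
    (33.5733, -7.5897, 21, "assault"),
]
vecs = risk_vectors(records)
print(vecs)  # one cell, risk vector [2/3, 1/3, 0.0]
```

    A learning algorithm can then be trained to predict this whole vector for a future spatiotemporal point instead of a single per-crime probability.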

    Architectural design of trust based recommendation system in customer relationship management

    Most companies are more customer-centric than they used to be; this strategy has made electronic commerce grow and enhanced the buyer experience. On the other side, companies have started to explore the customer-experience data generated in order to extract knowledge about their customers and manage them better, through electronic customer relationship management (eCRM) instead of classic customer relationship management (CRM). Large quantities of data have motivated companies to look for changes and ask for more functionality, which is what influenced software editors to adapt their solutions and harness the power of data. Nowadays, the data available (Big Data) puts existing systems and architectures into question and pushes us to rethink the logical layer that explores this data. This data wave creates a need to reconsider and study the strengths of existing eCRM/CRM solutions and architectures. The main contribution of this paper is to propose an architecture built on trust-based recommendation, able to provide companies with better accuracy, coverage, novelty and diversity during the sales process.
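    A trust-based recommender of the general kind this architecture builds on predicts a user's rating of an item as a trust-weighted average of the ratings of the users they trust. This minimal sketch, with made-up trust values and ratings rather than the paper's design, illustrates the idea:

```python
def predict_rating(user, item, ratings, trust):
    """Trust-weighted average: sum over trusted neighbours v of
    trust(user, v) * rating(v, item), normalised by total trust."""
    num = den = 0.0
    for v, t in trust.get(user, {}).items():
        if item in ratings.get(v, {}):
            num += t * ratings[v][item]
            den += t
    return num / den if den else None  # None: no trusted neighbour rated it

ratings = {"bob": {"laptop": 4.0}, "carol": {"laptop": 2.0}}
trust = {"alice": {"bob": 0.75, "carol": 0.25}}
print(predict_rating("alice", "laptop", ratings, trust))  # 3.5
```

    Weighting neighbours by explicit trust rather than rating similarity is what gives such recommenders better coverage for cold-start users, who have few ratings but may already have trust links.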

    A new architecture for monitoring land use and land cover change based on remote sensing and GIS: A data mining approach

    The issue of land use (LU) and land cover change (LCC) has become crucial around the world in recent years, not only for researchers, but also for urban planners and environmentalists who advocate sustainable land use in the future. In Morocco, this phenomenon affects large areas and is all the more pronounced because the climate is arid, with cycles of increasing drought, and soils are poor and highly vulnerable to erosion. In addition, the precarious living conditions of rural populations push them to over-exploit natural resources to meet their growing needs, which further amplifies environmental degradation. In this LU/LCC monitoring context, this paper aims, on the one hand, to give a clear survey of the classical methods and techniques used to monitor LU/LCC; on the other hand, the authors propose a new architecture whose objective is to integrate data mining techniques into LU/LCC monitoring in order to automatically and efficiently improve the monitoring, control and asset management of LU/LC.
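    One classical building block of LU/LCC monitoring, post-classification change detection, can be sketched as a cross-tabulation of two classified land-cover maps; the class legend and tiny rasters below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

CLASSES = {0: "water", 1: "forest", 2: "urban"}  # assumed legend

def change_matrix(before, after, n_classes=3):
    """Cross-tabulate two classified rasters: entry (i, j) counts
    pixels that moved from class i (date 1) to class j (date 2)."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (before.ravel(), after.ravel()), 1)
    return m

before = np.array([[1, 1], [0, 2]])  # classified map, date 1
after = np.array([[1, 2], [0, 2]])   # classified map, date 2
m = change_matrix(before, after)
print(m)  # diagonal = unchanged pixels, off-diagonal = transitions
changed = int(m.sum() - np.trace(m))
print(changed, "pixel(s) changed")  # one forest -> urban pixel
```

    Summaries of this matrix (deforested area, urban expansion, overall change rate) are exactly the quantities a monitoring architecture needs to track over time.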

    An Integrated Ensemble Learning Framework for Predicting Liver Disease

    Liver disease has become a pressing global issue, with a sharp increase in cases reported worldwide. Detecting liver disease can be difficult as it often has few noticeable symptoms, which means that by the time it is detected it may already have progressed to an advanced stage; many people die without even realizing they had it. Early detection is crucial as it enables patients to begin treatment earlier, which can potentially save their lives. This study assessed the efficacy of five ensemble machine learning (ML) models, namely RF, XGBoost, Extra Trees, bagging, and stacking, in predicting liver disease on the ILPD dataset. To prevent overfitting and biases in the dataset, several statistical pre-processing techniques were employed to handle missing data, outliers, and data balancing. The results underline the importance of the RFE feature selection method, which restricted the model to only the most relevant features and may thereby have improved its accuracy and efficiency. The highest testing accuracy, 93%, was achieved by the proposed model, which combined the improved preprocessing approach with a stacking ensemble classifier and RFE feature selection. The use of ensemble ML has given promising results: medical professionals can develop models better equipped to handle the complexity and variability of medical data, resulting in more accurate diagnoses, more effective treatment plans, and better patient outcomes.
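    The RFE-plus-stacking combination named above can be sketched with scikit-learn; the synthetic data, chosen base learners and hyperparameters are illustrative assumptions and will not reproduce the reported 93% on ILPD:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a tabular medical dataset (not the ILPD data).
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RFE keeps only the most relevant features; a stacking ensemble then
# combines tree-based base learners via a logistic-regression meta-learner.
model = make_pipeline(
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=5),
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("et", ExtraTreesClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

    Putting RFE inside the pipeline matters: feature selection is refit on each training fold, so the test split never leaks into the choice of features.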

    Liver Segmentation: A Weakly End-to-End Supervised Model

    Liver segmentation in CT images has multiple clinical applications and is expanding in scope. Clinicians can employ segmentation for the pathological diagnosis of liver disease, surgical planning, visualization and volumetric assessment in order to select the appropriate treatment. However, segmentation of the liver is still a challenging task due to the low contrast of medical images, tissue similarity with neighbouring abdominal organs, and high scale and shape variability. Recently, deep learning models have become the state of the art in many natural image processing tasks, such as detection, classification, and segmentation, thanks to the availability of annotated data. In the medical field, labeled data is limited due to privacy, the need for experts, and a time-consuming labeling process. In this paper, we present an efficient model combining selective pre-processing, augmentation, post-processing and an improved SegCaps network. Our proposed model is end-to-end, fully automatic, and generalizes well from such a limited amount of training data. The model has been validated on two 3D liver segmentation datasets and has obtained competitive segmentation results.
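    Segmentation quality in studies like this is commonly scored with the Dice coefficient; a minimal NumPy sketch of that generic metric (not this paper's evaluation code) is:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2 / (3 + 3) -> 0.667
```

    The same formula applies unchanged to 3D volumes, which is why it is the standard score for liver segmentation benchmarks; the small epsilon keeps the ratio defined when both masks are empty.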